
    Communication-Efficient Algorithms For Distributed Optimization

    This thesis is concerned with the design of distributed algorithms for solving optimization problems. We consider networks where each node has exclusive access to a cost function, and design algorithms that make all nodes cooperate to find the minimum of the sum of all the cost functions. Several problems in signal processing, control, and machine learning can be posed as such optimization problems. Given that communication is often the most energy-consuming operation in networks, it is important to design communication-efficient algorithms. The main contributions of this thesis are a classification scheme for distributed optimization and a set of corresponding communication-efficient algorithms. The class of optimization problems we consider is quite general: each function may depend on arbitrary components of the optimization variable, not necessarily on all of them. By dropping this common assumption of distributed optimization, we expose additional structure that can be used to reduce the number of communications. This structure is captured by our classification scheme, which identifies easier instances of the problem, for example the standard distributed optimization problem, in which all functions depend on all components of the variable. In our algorithms, no central node coordinates the network, all communications occur between neighboring nodes, and the data associated with each node is processed locally. We show several applications, including average consensus, support vector machines, network flows, and several distributed scenarios for compressed sensing. We also propose a new framework for distributed model predictive control. Through extensive numerical experiments, we show that our algorithms outperform prior distributed algorithms in communication efficiency, even some that were specifically designed for a particular application.
    Comment: Thesis defended on October 10, 2013. Dual PhD degree from Carnegie Mellon University, PA, and Instituto Superior Técnico, Lisbon, Portugal
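    For concreteness, the class of problems considered can be written as follows (a minimal sketch in our notation, with S_p denoting the set of components that node p's function depends on):

        % P nodes; node p holds f_p, which depends only on the subvector
        % x_{S_p} of x = (x_1, ..., x_n); S_p is node p's local domain.
        \begin{equation*}
          \underset{x \in \mathbb{R}^n}{\text{minimize}} \quad \sum_{p=1}^{P} f_p(x_{S_p})
        \end{equation*}
        % The standard distributed problem is the special case S_p = {1, ..., n}
        % for all p, i.e., every function depends on the full variable.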

    Distributed Optimization With Local Domains: Applications in MPC and Network Flows

    In this paper we consider a network with P nodes, where each node has exclusive access to a local cost function. Our contribution is a communication-efficient distributed algorithm that finds a vector x⋆ minimizing the sum of all the functions. We make the additional assumption that the functions have intersecting local domains, i.e., each function depends only on some components of the variable. Consequently, each node is interested in knowing only some components of x⋆, not the entire vector. This allows for improved communication efficiency. We apply our algorithm to model predictive control (MPC) and to network flow problems and show, through experiments on large networks, that our proposed algorithm requires fewer communications to converge than prior algorithms.
    Comment: Submitted to IEEE Trans. Aut. Control
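    As a toy illustration of intersecting local domains (our example, not the paper's), consider three nodes and x = (x_1, x_2, x_3):

        % Hypothetical 3-node instance:
        \begin{equation*}
          \underset{x \in \mathbb{R}^3}{\text{minimize}} \quad
          f_1(x_1, x_2) + f_2(x_1, x_2, x_3) + f_3(x_2, x_3)
        \end{equation*}
        % Node 1 only needs the optimal (x_1, x_2) and node 3 only (x_2, x_3),
        % so components a node never uses need not be communicated to it.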

    D-ADMM: A Communication-Efficient Distributed Algorithm For Separable Optimization

    We propose a distributed algorithm, named Distributed Alternating Direction Method of Multipliers (D-ADMM), for solving separable optimization problems in networks of interconnected nodes or agents. In a separable optimization problem there is a private cost function and a private constraint set at each node. The goal is to minimize the sum of all the cost functions, constraining the solution to be in the intersection of all the constraint sets. D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, although in practice convergence is observed even when these conditions are not met. We use D-ADMM to solve the following problems from signal processing and control: average consensus, compressed sensing, and support vector machines. Our simulations show that D-ADMM requires fewer communications than state-of-the-art algorithms to achieve a given accuracy level. Algorithms with low communication requirements are important, for example, in sensor networks, where sensors are typically battery-operated and communicating is the most energy-consuming operation.
    Comment: To appear in IEEE Transactions on Signal Processing
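    To illustrate the splitting that D-ADMM builds on, below is a minimal consensus-form ADMM for the average consensus application mentioned above (our sketch: it uses a global averaging step, whereas D-ADMM itself exchanges messages only between neighboring nodes):

        import numpy as np

        # Minimal global-consensus ADMM sketch for average consensus:
        #   minimize sum_p (x - theta_p)^2 / 2  s.t. all local copies agree.
        # Textbook consensus-form ADMM, shown only to illustrate the splitting;
        # D-ADMM itself is fully decentralized, which this sketch is not.
        rng = np.random.default_rng(0)
        P, rho = 10, 1.0
        theta = rng.normal(size=P)   # private value held by each node
        x = np.zeros(P)              # local copies of the variable
        z = 0.0                      # consensus variable
        u = np.zeros(P)              # scaled dual variables

        for _ in range(100):
            # Local closed-form updates:
            # argmin (x - theta_p)^2/2 + (rho/2)(x - z + u_p)^2
            x = (theta + rho * (z - u)) / (1.0 + rho)
            z = np.mean(x + u)       # averaging (the "consensus" step)
            u = u + x - z            # dual ascent

        print(z, theta.mean())       # z converges to the network average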

    Distributed Basis Pursuit

    We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least L1-norm solution of the underdetermined linear system Ax = b and is used, for example, for reconstruction in compressed sensing. Our algorithm solves BP on a distributed platform such as a sensor network, and is designed to minimize communication between nodes. The algorithm requires only that the network be connected, has no notion of a central processing node, and no node has access to the entire matrix A at any time. We consider two scenarios in which either the columns or the rows of A are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably fewer communications between the nodes than state-of-the-art algorithms.
    Comment: Preprint of the journal version of the paper; IEEE Transactions on Signal Processing, Vol. 60, Issue 4, April 2012
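    As a centralized reference point (our sketch, not the distributed algorithm), BP can be recast as a linear program and handed to an off-the-shelf LP solver; D-ADMM distributes this computation so that no node ever holds all of A:

        import numpy as np
        from scipy.optimize import linprog

        # Centralized Basis Pursuit baseline:
        #   minimize ||x||_1  subject to  A x = b,
        # recast as an LP via the split x = xp - xn with xp, xn >= 0.
        rng = np.random.default_rng(1)
        m, n = 20, 50
        A = rng.normal(size=(m, n))
        x_true = np.zeros(n)
        x_true[rng.choice(n, 4, replace=False)] = rng.normal(size=4)  # sparse
        b = A @ x_true

        c = np.ones(2 * n)             # sum(xp) + sum(xn) = ||x||_1
        A_eq = np.hstack([A, -A])      # A xp - A xn = b
        res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
        x_hat = res.x[:n] - res.x[n:]

        # Typically small: BP recovers the sparse signal from few measurements.
        print(np.linalg.norm(x_hat - x_true))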

    Measurement-Consistent Networks via a Deep Implicit Layer for Solving Inverse Problems

    End-to-end deep neural networks (DNNs) have become the state of the art (SOTA) for solving inverse problems. Despite their outstanding performance, during deployment such networks are sensitive to minor variations in the training pipeline and often fail to reconstruct small but important details, which is critical in medical imaging, astronomy, or defence. These instabilities can be explained by the fact that DNNs ignore the forward measurement model during deployment, and thus fail to enforce consistency between their output and the input measurements. To overcome this, we propose a framework that transforms any DNN for inverse problems into a measurement-consistent one. This is done by appending to it an implicit layer (or deep equilibrium network) designed to solve a model-based optimization problem. The implicit layer consists of a shallow learnable network and can be integrated into end-to-end training. Experiments on single-image super-resolution show that the proposed framework leads to significant improvements in reconstruction quality and robustness over SOTA DNNs.
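    The following sketch conveys the underlying idea with plain gradient descent on a model-based objective (our simplification: the paper's implicit layer is learnable and solved to a fixed point, which this is not; all names and parameters here are illustrative):

        import numpy as np

        # Given a DNN reconstruction x_dnn and measurements y = A @ x, refine by
        #   minimize_z  0.5 * ||A z - y||^2 + 0.5 * lam * ||z - x_dnn||^2
        def consistency_layer(x_dnn, A, y, lam=0.1, iters=200):
            step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # 1 / Lipschitz const.
            z = x_dnn.copy()
            for _ in range(iters):
                grad = A.T @ (A @ z - y) + lam * (z - x_dnn)
                z -= step * grad
            return z

        rng = np.random.default_rng(2)
        m, n = 30, 60
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        x = rng.normal(size=n)
        y = A @ x
        x_dnn = x + 0.3 * rng.normal(size=n)   # stand-in for a DNN output
        z = consistency_layer(x_dnn, A, y)
        # The measurement residual drops after the correction:
        print(np.linalg.norm(A @ x_dnn - y), np.linalg.norm(A @ z - y))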

    Single Image Super-Resolution via CNN Architectures and TV-TV Minimization

    Super-resolution (SR) is a technique for increasing the resolution of a given image. With applications in many areas, from medical imaging to consumer electronics, several SR methods have been proposed. Currently, the best-performing methods are based on convolutional neural networks (CNNs) and require extensive datasets for training. At test time, however, they fail to impose consistency between the super-resolved image and the given low-resolution image, a property that classic reconstruction-based algorithms naturally enforce, despite their poorer performance. Motivated by this observation, we propose a new framework that joins both approaches and produces images of higher quality than either class of prior methods. Although our framework requires additional computation, our experiments on Set5, Set14, and BSD100 show that it systematically produces images with better peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than the current state-of-the-art CNN architectures for SR.
    Comment: Accepted to BMVC 2019; v2 contains updated results and minor bug fixes
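    One natural way to write the combined step (a hedged sketch in our notation, where w is the CNN output, b the given low-resolution image, and A the downsampling operator) is the TV-TV minimization problem:

        \begin{equation*}
          \underset{x}{\text{minimize}} \;\;
          \|x\|_{\mathrm{TV}} + \beta \, \|x - w\|_{\mathrm{TV}}
          \quad \text{subject to} \quad A x = b
        \end{equation*}
        % The constraint Ax = b enforces consistency with the low-resolution
        % image; the second TV term keeps the solution close to the CNN output.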

    X-ray image separation via coupled dictionary learning

    In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from double-sided paintings. Unlike prior source separation methods, which are based on statistical or structural incoherence of the sources, we use visual images taken from the front and back of the panel to drive the separation process. The coupling of the two imaging modalities is achieved via a new multi-scale dictionary learning method. Experimental results demonstrate that our method succeeds in discriminating the sources, while state-of-the-art methods fail to do so.
    Comment: To be presented at the IEEE International Conference on Image Processing (ICIP), 2016
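    An illustrative coupled sparse model behind this idea (our simplified notation, not the paper's exact multi-scale formulation): the mixed X-ray patch y is a sum of two components, each sharing a sparse code with the corresponding visual-image patch v_i through paired dictionaries:

        \begin{align*}
          y &= x_1 + x_2, \\
          x_i &\approx D^{x}_i z_i, \quad v_i \approx D^{v}_i z_i, \quad
          \|z_i\|_0 \ \text{small}, \qquad i = 1, 2
        \end{align*}
        % Each code z_i must explain both an X-ray component and the matching
        % visual image, so the visual modalities drive the separation.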

    A unified algorithmic approach to distributed optimization

    We address general optimization problems formulated on networks. Each node in the network has a function, and the goal is to find a vector x ∈ R^n that minimizes the sum of all the functions. We assume that each function depends on a set of components of x, not necessarily on all of them. This creates additional structure in the problem, which can be captured by the classification scheme we develop. This scheme not only enables us to design an algorithm that solves very general distributed optimization problems, but also allows us to categorize prior algorithms and applications. Our general-purpose algorithm shows performance superior to prior algorithms, including algorithms that are application-specific.
    Index Terms: Distributed optimization, sensor networks